Lecture 9: Generalization

Author

  • Roger Grosse
Abstract

When we train a machine learning model, we don’t just want it to learn to model the training data. We want it to generalize to data it hasn’t seen before. Fortunately, there’s a very convenient way to measure an algorithm’s generalization performance: we measure its performance on a held-out test set, consisting of examples it hasn’t seen before. If an algorithm works well on the training set but fails to generalize, we say it is overfitting. Improving generalization (or preventing overfitting) in neural nets is still somewhat of a dark art, but this lecture will cover a few simple strategies that can often help a lot.
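A minimal sketch of this held-out evaluation (not part of the lecture; a toy NumPy polynomial-regression example): the model is fit on a training set only, and the gap between training error and test error is what flags overfitting.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D regression problem: noisy samples of a smooth function.
    x = rng.uniform(-1.0, 1.0, size=40)
    y = np.sin(3.0 * x) + 0.1 * rng.normal(size=x.shape)

    # Hold out part of the data as a test set the model never sees during fitting.
    x_train, y_train = x[:30], y[:30]
    x_test, y_test = x[30:], y[30:]

    for degree in (1, 3, 15):
        # Fit a polynomial to the training set only.
        coeffs = np.polyfit(x_train, y_train, deg=degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        # A large gap between training and test error signals overfitting.
        print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

The degree-15 fit typically drives the training error near zero while the test error grows, which is exactly the overfitting pattern described above.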

Similar resources

Lecture 9: Boosting

Last week we discussed some algorithmic aspects of machine learning. We saw one very powerful family of learning algorithms, namely nonparametric methods that make very weak assumptions on the data-generating distribution, but consequently have poor generalization error/convergence rates. These methods tend to have low approximation errors, but extremely high estimation errors. Then we saw som...

Lecture 6: Rademacher Complexity

In this lecture, we discuss Rademacher complexity, which is a different (and often better) way to obtain generalization bounds for learning hypothesis classes.
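For reference, the standard definition behind such bounds (stated here from general knowledge, not quoted from that lecture): the empirical Rademacher complexity of a hypothesis class $\mathcal{H}$ on a sample $S = (z_1, \dots, z_m)$ is
\[
\hat{\mathfrak{R}}_S(\mathcal{H}) = \mathbb{E}_{\sigma}\!\left[ \sup_{h \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^{m} \sigma_i \, h(z_i) \right],
\]
where the $\sigma_i$ are independent uniform $\pm 1$ (Rademacher) variables. With probability at least $1 - \delta$, every $h \in \mathcal{H}$ taking values in $[0, 1]$ then satisfies $\mathbb{E}[h(z)] \le \frac{1}{m}\sum_{i=1}^{m} h(z_i) + 2\,\mathfrak{R}_m(\mathcal{H}) + \sqrt{\log(1/\delta)/(2m)}$, which is the kind of generalization bound the abstract refers to.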

CS 269: Machine Learning Theory, Lecture 14: Generalization Error of Adaboost

In this lecture we will continue our discussion of the Adaboost algorithm and derive a bound on the generalization error. We saw last time that the training error decreases exponentially with respect to the number of rounds T. However, we also want to see the performance of this algorithm on new test data. Today we will show why the Adaboost algorithm generalizes so well and why it avoids over...

Lecture hall sequences, q-series, and asymmetric partition identities

We use generalized lecture hall partitions to discover a new pair of q-series identities. These identities are unusual in that they involve partitions into parts from asymmetric residue classes, much like the little Göllnitz partition theorems. We derive a two-parameter generalization of our identities that, surprisingly, gives new analytic counterparts of the little Göllnitz theorems. Finally,...

(67577) Introduction to Machine Learning Lecture 1 – Introduction and Gentle Start Lecture 2 – Bias-Complexity Tradeoff Lecture 3(a) – MDL Lecture 3(b) – Validation Lecture 3(c) – Compression Bounds

In the previous lectures we saw how to express prior knowledge by restricting ourselves to finite hypothesis classes or by defining an order over countable hypothesis classes. In this lecture we show how one can learn even uncountable hypothesis classes by deriving compression bounds. Roughly speaking, we shall see that if a learning algorithm can express the output hypothesis using a small s...


Publication date: 2018